Named-entity recognition

Named-entity recognition (NER) (also known as entity identification and entity extraction) is a subtask of information extraction that seeks to locate and classify atomic elements in text into predefined categories such as the names of persons, organizations, and locations, expressions of time, quantities, monetary values, and percentages.

Most research on NER systems has been structured as taking an unannotated block of text, such as this one:

Jim bought 300 shares of Acme Corp. in 2006.

And producing an annotated block of text, such as this one:

<ENAMEX TYPE="PERSON">Jim</ENAMEX> bought <NUMEX TYPE="QUANTITY">300</NUMEX> shares of <ENAMEX TYPE="ORGANIZATION">Acme Corp.</ENAMEX> in <TIMEX TYPE="DATE">2006</TIMEX>.

In this example, the annotations use the ENAMEX, NUMEX, and TIMEX tags developed for the Message Understanding Conference (MUC) in the 1990s.
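The tagging scheme above can be mimicked by a toy dictionary-and-pattern annotator. The gazetteer entries and regular expressions below are invented for this one sentence and are not the actual MUC resources; real systems handle overlapping matches, inflection, and far larger lexicons.

```python
import re

# Illustrative gazetteer and patterns (not real MUC resources).
GAZETTEER = {
    "Jim": ("ENAMEX", "PERSON"),
    "Acme Corp.": ("ENAMEX", "ORGANIZATION"),
}
PATTERNS = [
    (re.compile(r"\b\d{4}\b"), ("TIMEX", "DATE")),            # four-digit years
    (re.compile(r"\b\d+(?=\s+shares)"), ("NUMEX", "QUANTITY")),  # counts of shares
]

def annotate(text):
    spans = []  # (start, end, tag, type)
    for name, (tag, etype) in GAZETTEER.items():
        i = text.find(name)  # first occurrence only, for simplicity
        if i >= 0:
            spans.append((i, i + len(name), tag, etype))
    for pattern, (tag, etype) in PATTERNS:
        for m in pattern.finditer(text):
            spans.append((m.start(), m.end(), tag, etype))
    # Wrap each span in an SGML-style tag, left to right.
    out, last = [], 0
    for start, end, tag, etype in sorted(spans):
        out.append(text[last:start])
        out.append(f'<{tag} TYPE="{etype}">{text[start:end]}</{tag}>')
        last = end
    out.append(text[last:])
    return "".join(out)

print(annotate("Jim bought 300 shares of Acme Corp. in 2006."))
```

Running this reproduces the annotated sentence shown above.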

State-of-the-art NER systems for English produce near-human performance. For example, the best system entering MUC-7 scored an F-measure of 93.39%, while human annotators scored 97.60% and 96.95%.[1][2] The best automatic system thus had roughly twice the error rate (6.61%) of the human annotators (2.40% and 3.05%).
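The F-measure in these comparisons is the harmonic mean of precision and recall, and the quoted error rates are simply 100 minus the F-measure. A quick sketch of the arithmetic:

```python
def f_measure(precision, recall):
    """Balanced F-measure: the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# The comparison in the text treats (100 - F) as the error rate:
for f in (93.39, 97.60, 96.95):
    print(f"F = {f:.2f}%  ->  error = {100 - f:.2f}%")
```

This yields error rates of 6.61%, 2.40%, and 3.05%, and 6.61 is between 2.2 and 2.8 times the human figures, hence "roughly twice."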

Approaches

NER systems have been created that use linguistic grammar-based techniques as well as statistical models. Hand-crafted grammar-based systems typically obtain better precision, but at the cost of lower recall and months of work by experienced computational linguists. Statistical NER systems typically require a large amount of manually annotated training data.
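A minimal sketch of the grammar-based flavor, assuming just two hand-written patterns for monetary and percentage expressions (the patterns are illustrative only; real systems use far richer grammars):

```python
import re

# Two hand-crafted rules in the spirit of grammar-based NER.
MONEY = re.compile(r"\$\d+(?:,\d{3})*(?:\.\d{2})?")  # e.g. $1,200,000 or $5.25
PERCENT = re.compile(r"\d+(?:\.\d+)?\s?%")           # e.g. 4.5% or 12 %

def find_numeric_entities(text):
    entities = [(m.group(), "MONEY") for m in MONEY.finditer(text)]
    entities += [(m.group(), "PERCENT") for m in PERCENT.finditer(text)]
    return entities

print(find_numeric_entities("Revenue rose 4.5% to $1,200,000 last year."))
```

Such rules are precise on the strings they anticipate but silently miss everything else (e.g. "twelve dollars"), which is the high-precision, lower-recall trade-off described above.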

Problem domains

Research indicates that even state-of-the-art NER systems are brittle, meaning that NER systems developed for one domain do not typically perform well on other domains.[3] Considerable effort is involved in tuning NER systems to perform well in a new domain; this is true for both rule-based and trainable statistical systems.

Early work in NER systems in the 1990s was aimed primarily at extraction from journalistic articles. Attention then turned to the processing of military dispatches and reports. Later stages of the Automatic Content Extraction (ACE) evaluation also included several informal text styles, such as weblogs and transcripts of conversational telephone speech. Since about 1998, there has been a great deal of interest in entity identification in the molecular biology, bioinformatics, and medical natural language processing communities. The most common entities of interest in that domain have been the names of genes and gene products.

Named entity types

In the expression named entity, the word named restricts the task to those entities for which one or more rigid designators, as defined by Kripke, stand for the referent. For instance, the automotive company created by Henry Ford in 1903 is referred to as Ford or Ford Motor Company. Rigid designators include proper names as well as certain natural-kind terms such as biological species and substances.

There is general agreement to include temporal expressions and some numerical expressions (e.g., money, percentages) as instances of named entities in the context of the NER task. While some instances of these types are good examples of rigid designators (e.g., the year 2001), there are also many invalid ones (e.g., I take my vacations in “June”). In the first case, the year 2001 refers to the 2001st year of the Gregorian calendar. In the second case, the month June may refer to the month of an undefined year (past June, next June, June 2020, etc.). It is arguable that the named entity definition is loosened in such cases for practical reasons. The definition of the term named entity is therefore not strict and often has to be explained in the context in which it is used.[4]

At least two hierarchies of named entity types have been proposed in the literature. The BBN hierarchy, proposed in 2002 for question answering, consists of 29 types and 64 subtypes.[5] Sekine's extended hierarchy, also proposed in 2002, comprises 200 subtypes.[6]

Current challenges and research trends

Despite the high F-measures reported on the MUC-7 dataset, the problem of named entity recognition is far from solved. Current efforts are directed at reducing the annotation labor,[7][8] achieving robust performance across domains,[9][10] and scaling up to fine-grained entity types.[11][12]

A recently emerging task of identifying "important expressions" in text and cross-linking them to Wikipedia[13][14][15] can be seen as an instance of extremely fine-grained named entity recognition, where the types are the actual Wikipedia pages describing the (potentially ambiguous) concepts. Below is an example output of a Wikification system:

<ENTITY> http://en.wikipedia.org/wiki/Michael_I._Jordan Michael Jordan </ENTITY> is a professor at <ENTITY> http://en.wikipedia.org/wiki/University_of_California,_Berkeley Berkeley </ENTITY>
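The core difficulty in the example above is disambiguation: "Michael Jordan" could link to several pages. One common framing is to pick the candidate page whose context best matches the sentence. The candidate lists and context words below are hand-picked for illustration and do not come from any real Wikification system:

```python
# Toy Wikification: choose among candidate Wikipedia pages for a mention
# by word overlap between the sentence and per-page context words.
# Candidate URLs and context sets are illustrative assumptions.
CANDIDATES = {
    "Michael Jordan": {
        "http://en.wikipedia.org/wiki/Michael_Jordan":
            {"basketball", "bulls", "nba"},
        "http://en.wikipedia.org/wiki/Michael_I._Jordan":
            {"professor", "berkeley", "machine", "learning"},
    },
}

def wikify(mention, sentence):
    words = set(sentence.lower().split())
    pages = CANDIDATES.get(mention, {})
    # Pick the page whose context words overlap the sentence most.
    return max(pages, key=lambda url: len(pages[url] & words), default=None)

print(wikify("Michael Jordan", "Michael Jordan is a professor at Berkeley"))
```

With the "professor at Berkeley" context, the overlap favors the machine-learning researcher's page, matching the example output above.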

Available systems

Several systems are available online. For traditional NER, the most popular publicly available systems are the Illinois NER system, the Stanford NER system, and the LingPipe NER system. The Illinois system reports 90.6 F1 on the CoNLL03 NER shared-task data, and the Stanford system reports 86.86 F1.[16][17]

There are also several publicly available Wikification systems for identifying important expressions in text and cross-linking them to Wikipedia, most notably the Illinois Wikification system, the WM Wikifier, and TAGME.

NER evaluation forums

Evaluation of NER systems is critical to the scientific progress of the field.

Most evaluation of these systems has been performed at conferences or contests put on by government organizations, sometimes acting in concert with contractors or academics.
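Most of the forums below score systems by precision, recall, and F1 over entity spans, with exact-match scoring in the style of the CoNLL shared tasks: an entity counts as correct only if both its span and its type match a gold annotation. A minimal sketch of that metric (the span offsets in the example are made up):

```python
# Exact-match entity scoring: an entity is correct only if its
# (start, end, type) triple matches the gold annotation exactly.
def score(gold, predicted):
    gold, predicted = set(gold), set(predicted)
    tp = len(gold & predicted)  # true positives
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = [(0, 3, "PER"), (25, 35, "ORG"), (39, 43, "DATE")]
pred = [(0, 3, "PER"), (25, 35, "ORG"), (11, 14, "DATE")]
print(score(gold, pred))
```

Exact-match scoring is deliberately strict: a prediction with the right type but a boundary off by one word counts as both a false positive and a false negative.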

Conference | Acronym | Language(s) | Year(s) | Sponsor
Message Understanding Conference | MUC | English | 1987–1999 | DARPA
Multilingual Entity Task Conference | MET | Chinese and Japanese | 1998 | US
Automatic Content Extraction Program | ACE | English | 2000–2008 | NIST
Conference on Computational Natural Language Learning | CoNLL | Spanish and Dutch / German and English | 2002–2003 |
Evaluation contest for named entity recognizers in Portuguese | HAREM | Portuguese | 2004–2008 | Linguateca
Information Retrieval and Extraction Exercise | IREX | Japanese | 1998–1999 |
ACL Special Interest Group in Chinese | SIGHan | Chinese | 2006 |
TAC Knowledge Base Population Evaluation | TAC/KBP | English | 2009– | NIST

References

  1. ^ Elaine Marsh, Dennis Perzanowski, "MUC-7 Evaluation of IE Technology: Overview of Results", 29 April 1998.
  2. ^ MUC-07 Proceedings (Named Entity Tasks)
  3. ^ Poibeau, Thierry and Kosseim, L. (2001) Proper Name Extraction from Non-Journalistic Texts. Proc. Computational Linguistics in the Netherlands.
  4. ^ http://www.webknox.com/blog/2010/09/named-entity-definition/
  5. ^ http://www.ldc.upenn.edu/Catalog/docs/LDC2005T33/BBN-Types-Subtypes.html
  6. ^ http://nlp.cs.nyu.edu/ene/
  7. ^ Word representations: A simple and general method for semi-supervised learning. http://cogcomp.cs.illinois.edu/papers/TurianRaBe2010.pdf
  8. ^ Phrase Clustering for Discriminative Learning. http://aclweb.org/anthology/P/P09/P09-1116.pdf
  9. ^ Design Challenges and Misconceptions in Named Entity Recognition. http://cogcomp.cs.illinois.edu/papers/RatinovRo09.pdf
  10. ^ Frustratingly Easy Domain Adaptation. http://www.cs.utah.edu/~hal/docs/daume07easyadapt.pdf
  11. ^ Fine-Grained Named Entity Recognition Using Conditional Random Fields for Question Answering. http://www.springerlink.com/content/q30m40611u821m2n
  12. ^ http://nlp.cs.nyu.edu/ene/
  13. ^ Linking Documents to Encyclopedic Knowledge. http://www.cs.unt.edu/~rada/papers/mihalcea.cikm07.pdf
  14. ^ Learning to link with Wikipedia. http://www.cs.waikato.ac.nz/~dnk2/publications/CIKM08-LearningToLinkWithWikipedia.pdf
  15. ^ Local and Global Algorithms for Disambiguation to Wikipedia. http://cogcomp.cs.illinois.edu/papers/RRDA11.pdf
  16. ^ http://cogcomp.cs.illinois.edu/papers/RatinovRo09.pdf
  17. ^ http://nlp.stanford.edu/~manning/papers/gibbscrf3.pdf
